
    Surface Registration for Pharyngeal Radiation Treatment Planning

    Endoscopy is an in-body examination procedure that enables direct visualization of tumor spread on tissue surfaces. In the context of radiation treatment planning for throat cancer, there have been attempts to fuse this endoscopic information into the planning CT space for better tumor localization. One way to achieve this CT/endoscope fusion is to first reconstruct a full 3D surface model from the endoscopic video and then register that surface into the CT space. Both steps require an algorithm that can accurately register two or more surfaces. In this dissertation, I present a surface registration method I have developed, called Thin Shell Demons (TSD), for achieving the two goals mentioned above. There are two key aspects in TSD: geometry and mechanics. First, I develop a novel surface geometric feature descriptor based on multi-scale curvatures that can accurately capture local shape information. I show that the descriptor can be effectively used in TSD and in other surface registration frameworks, such as spectral graph matching. Second, I adopt a physical thin shell model in TSD to produce realistic surface deformation in the registration process. I also extend this physical model to orthotropic thin shells and propose a probabilistic framework to learn orthotropic stiffness parameters from a group of known deformations. The anisotropic stiffness learning opens up a new perspective on shape analysis and allows more accurate surface deformation and registration in the TSD framework. Finally, I show that TSD can also be extended into a novel groupwise registration framework. The advantages of Thin Shell Demons allow us to build a complete 3D model of the throat, called an endoscopogram, from a group of single-frame-based reconstructions. It also allows us to register an endoscopogram to a CT segmentation surface, thereby allowing information transfer for treatment planning.
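    The abstract describes a descriptor-driven, demons-style surface registration. The minimal Python sketch below illustrates that general idea under simplifying assumptions: local-PCA "surface variation" at several neighborhood radii stands in for the multi-scale curvature descriptor, nearest-neighbor matching with a descriptor penalty stands in for the correspondence force, and simple displacement averaging stands in for the thin-shell regularizer. It is not the authors' TSD implementation; radii and weights are illustrative and assume roughly unit-scale data.

```python
# Illustrative sketch only: descriptor-driven, demons-style point-cloud alignment.
import numpy as np
from scipy.spatial import cKDTree

def local_shape_descriptor(points, radii=(0.25, 0.5, 1.0)):
    """Multi-scale descriptor: surface variation (lambda_min / sum(lambda))
    of the local covariance at several neighborhood radii."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), len(radii)))
    for j, r in enumerate(radii):
        for i, p in enumerate(points):
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 4:
                continue
            cov = np.cov((nbrs - nbrs.mean(0)).T)
            w = np.sort(np.linalg.eigvalsh(cov))
            feats[i, j] = w[0] / max(w.sum(), 1e-12)
    return feats

def demons_like_registration(moving, fixed, iters=30, step=0.5, feat_weight=5.0):
    """Pull each moving vertex toward its best-matching fixed point, where
    'best' trades off distance and descriptor similarity, then smooth the
    displacement field (a crude stand-in for thin-shell mechanics)."""
    f_fix = local_shape_descriptor(fixed)
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(iters):
        f_mov = local_shape_descriptor(current)
        dists, knn = tree.query(current, k=5)          # candidate matches
        feat_cost = np.linalg.norm(f_fix[knn] - f_mov[:, None, :], axis=2)
        best = np.argmin(dists + feat_weight * feat_cost, axis=1)
        target = fixed[knn[np.arange(len(current)), best]]
        disp = target - current
        _, nn = cKDTree(current).query(current, k=8)   # smooth over neighbors
        disp = disp[nn].mean(axis=1)
        current = current + step * disp
    return current

# Toy usage: align a rotated copy of a point-sampled sphere back onto the original.
rng = np.random.default_rng(0)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
warped = demons_like_registration(sphere @ R.T, sphere)
```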

    Generating Realistic 3D Brain MRIs Using a Conditional Diffusion Probabilistic Model

    Training deep learning models on brain MRI is often plagued by small sample sizes, which can lead to biased training or overfitting. One potential solution is to synthetically generate realistic MRIs via generative models such as the Generative Adversarial Network (GAN). However, existing GANs for synthesizing realistic brain MRIs largely rely on image-to-image conditioned transformations requiring extensive, well-curated pairs of MRI samples for training. On the other hand, unconditioned GAN models (i.e., those generating MRIs from random noise) are unstable during training and tend to produce blurred images during inference. Here, we propose an efficient strategy that generates high-fidelity 3D brain MRIs via a Diffusion Probabilistic Model (DPM). To this end, we train a conditional DPM with attention to generate an MRI sub-volume (a set of slices at arbitrary locations) conditioned on another subset of slices from the same MRI. By computing attention weights from slice indices and using a mask to encode the target and conditional slices, the model is able to learn long-range dependencies across distant slices with limited computational resources. After training, the model can progressively synthesize a new 3D brain MRI by generating the first subset of slices from random noise and conditionally generating subsequent slices. Based on 1262 T1-weighted MRIs from three neuroimaging studies, our experiments demonstrate that the proposed method can generate high-quality 3D MRIs that share the same distribution as real MRIs and are more realistic than those produced by GAN-based models.
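    As a rough illustration of the progressive, slice-conditioned sampling strategy described above, the sketch below runs a generic DDPM ancestral-sampling loop once per sub-volume, generating the first subset of slices from noise and each later subset conditioned on slices synthesized so far. The noise predictor is a zero-returning stub standing in for the paper's attention network; the schedule, mask, and slice-index arguments only mirror the conditioning interface the abstract describes and are not the authors' implementation.

```python
# Illustrative sketch only: progressive, slice-conditioned DDPM sampling.
import numpy as np

T = 200                                    # diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoise_eps(x_t, t, cond_slices, cond_mask, slice_idx):
    """Placeholder noise predictor. A real model would attend across slices
    using embeddings of `slice_idx` and the binary `cond_mask` marking which
    slices are given (True) versus being generated (False)."""
    return np.zeros_like(x_t)              # stand-in: predicts zero noise

def sample_subvolume(shape, cond_slices, cond_mask, slice_idx, rng):
    """Standard DDPM ancestral sampling for one subset of target slices."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = denoise_eps(x, t, cond_slices, cond_mask, slice_idx)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

def generate_volume(n_slices=32, subset=8, hw=(64, 64), seed=0):
    """First subset from pure noise, then each subsequent subset conditioned
    on everything synthesized so far."""
    rng = np.random.default_rng(seed)
    volume = np.zeros((n_slices, *hw))
    done = np.zeros(n_slices, dtype=bool)
    for start in range(0, n_slices, subset):
        target = np.arange(start, min(start + subset, n_slices))
        cond = np.where(done)[0]                      # already-generated slices
        volume[target] = sample_subvolume(
            (len(target), *hw), volume[cond], done.copy(), target, rng)
        done[target] = True
    return volume

mri_like = generate_volume()   # (32, 64, 64) array; noise-driven with the stub predictor
```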

    Unifying Token and Span Level Supervisions for Few-Shot Sequence Labeling

    Few-shot sequence labeling aims to identify novel classes based on only a few labeled samples. Existing methods solve the data scarcity problem mainly by designing token-level or span-level labeling models based on metric learning. However, these methods are trained at only a single granularity (i.e., either the token level or the span level) and therefore inherit the weaknesses of that granularity. In this paper, we first unify token-level and span-level supervision and propose a Consistent Dual Adaptive Prototypical (CDAP) network for few-shot sequence labeling. CDAP contains token-level and span-level networks that are jointly trained at different granularities. To align the outputs of the two networks, we further propose a consistent loss that enables them to learn from each other. During the inference phase, we propose a consistent greedy inference algorithm that first adjusts the predicted probabilities and then greedily selects non-overlapping spans with maximum probability. Extensive experiments show that our model achieves new state-of-the-art results on three benchmark datasets. Accepted by ACM Transactions on Information Systems.
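    The consistent greedy inference step can be pictured as: adjust each span's probability using the token-level model, then accept spans in descending score order, skipping any that overlap an already accepted span. The sketch below assumes a simple averaging rule for the adjustment and a hypothetical 0.5 acceptance threshold; the paper's exact combination rule and threshold may differ.

```python
# Illustrative sketch only: score adjustment plus greedy non-overlapping span selection.
from typing import Dict, List, Tuple

Span = Tuple[int, int, str]   # (start, end) token indices, inclusive, plus label

def adjust_scores(span_probs: Dict[Span, float],
                  token_probs: List[Dict[str, float]]) -> Dict[Span, float]:
    """Blend each span's probability with the mean token-level probability of
    its label over the tokens it covers (assumed combination rule)."""
    adjusted = {}
    for (start, end, label), p_span in span_probs.items():
        p_tok = sum(token_probs[i].get(label, 0.0) for i in range(start, end + 1))
        p_tok /= (end - start + 1)
        adjusted[(start, end, label)] = 0.5 * (p_span + p_tok)
    return adjusted

def greedy_decode(adjusted: Dict[Span, float], threshold: float = 0.5) -> List[Span]:
    """Pick spans in descending adjusted score, skipping overlaps with chosen spans."""
    chosen: List[Span] = []
    occupied = set()
    for (start, end, label), p in sorted(adjusted.items(), key=lambda kv: -kv[1]):
        if p < threshold:
            break
        if any(i in occupied for i in range(start, end + 1)):
            continue
        chosen.append((start, end, label))
        occupied.update(range(start, end + 1))
    return sorted(chosen)

# Toy usage on a 5-token sentence.
span_probs = {(0, 1, "PER"): 0.9, (1, 2, "ORG"): 0.7, (3, 4, "LOC"): 0.8}
token_probs = [{"PER": 0.95}, {"PER": 0.9, "ORG": 0.4},
               {"ORG": 0.3}, {"LOC": 0.85}, {"LOC": 0.8}]
print(greedy_decode(adjust_scores(span_probs, token_probs)))
# -> [(0, 1, 'PER'), (3, 4, 'LOC')]; the ORG span overlaps the chosen PER span and is skipped
```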